SKTR: Trace Recovery from Stochastically Known Logs
Developments in machine learning together with the increasing usage of sensor
data challenge the reliance on deterministic logs, requiring new process mining
solutions for uncertain, and in particular stochastically known, logs. In this
work we formulate trace recovery, the task of generating a deterministic log
from stochastically known logs that is as faithful to reality as possible. An
effective trace recovery algorithm would be a powerful aid for maintaining
credible process mining tools for uncertain settings. We propose an algorithmic
framework for this task that recovers the best alignment between a
stochastically known log and a process model, with three innovative features.
Our algorithm, SKTR, 1) handles both Markovian and non-Markovian processes; 2)
offers a quality-based balance between a process model and a log, depending on
the available process information, sensor quality, and machine learning
predictive power; and 3) offers a novel use of a synchronous product
multigraph to create the log. An empirical analysis using five publicly
available datasets, three of which use predictive models over standard video
capturing benchmarks, shows an average relative accuracy improvement of more
than 10% over a common baseline.
Comment: Submitted version -- Accepted to the 5th International Conference on
Process Mining (ICPM), 2023
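A minimal sketch of the kind of baseline such a trace-recovery algorithm is measured against: for each event in a stochastically known trace, keep the activity with the highest probability. The input format and activity names are illustrative assumptions; SKTR itself additionally aligns the log with a process model, which this naive argmax baseline ignores.

```python
# Naive trace recovery: pick the most likely activity per event.
# (Illustrative baseline only -- not the SKTR alignment algorithm.)

def argmax_recovery(stochastic_trace):
    """stochastic_trace: list of dicts mapping activity -> probability."""
    return [max(event, key=event.get) for event in stochastic_trace]

trace = [
    {"register": 0.9, "check": 0.1},
    {"check": 0.6, "approve": 0.4},
    {"approve": 0.7, "reject": 0.3},
]
print(argmax_recovery(trace))  # ['register', 'check', 'approve']
```

A model-aware method can overturn these per-event choices when the most likely activity sequence is impossible under the process model.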
Adaptive Learning for the Resource-Constrained Classification Problem
Resource-constrained classification tasks are common in real-world
applications such as allocating tests for disease diagnosis, hiring decisions
when filling a limited number of positions, and defect detection in
manufacturing settings under a limited inspection budget. Typical
classification algorithms treat the learning process and the resource
constraints as two separate and sequential tasks. Here we design an adaptive
learning approach that considers resource constraints and learning jointly by
iteratively fine-tuning misclassification costs. Via a structured experimental
study using a publicly available data set, we evaluate a decision tree
classifier that utilizes the proposed approach. The adaptive learning approach
performs significantly better than alternative approaches, especially for
difficult classification problems in which the performance of common approaches
may be unsatisfactory. We envision the adaptive learning approach as an
important addition to the repertoire of techniques for handling
resource-constrained classification problems.
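The joint idea described above, iteratively tuning misclassification costs until predictions respect the resource budget, can be sketched as follows. The score-threshold decision rule and the geometric bisection on the false-negative cost are illustrative assumptions, not the paper's actual algorithm.

```python
def predict_positives(scores, cost_fn, cost_fp):
    # Cost-sensitive rule: predict positive when the expected cost of a
    # false negative outweighs that of a false positive.
    threshold = cost_fp / (cost_fp + cost_fn)
    return [s >= threshold for s in scores]

def fit_cost_to_budget(scores, budget, cost_fp=1.0, iters=50):
    # Geometric bisection on the false-negative cost: raising it lowers
    # the decision threshold and admits more positives.
    lo, hi = 1e-6, 1e6
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if sum(predict_positives(scores, mid, cost_fp)) > budget:
            hi = mid          # too many positives: lower the FN cost
        else:
            lo = mid          # within budget: try a higher FN cost
    return lo                 # largest cost seen that respects the budget

scores = [0.95, 0.80, 0.55, 0.40, 0.20, 0.05]
cost_fn = fit_cost_to_budget(scores, budget=2)
print(sum(predict_positives(scores, cost_fn, 1.0)))  # 2
```

In the paper's setting the cost would be fed back into retraining the decision tree rather than a fixed scorer, but the feedback loop has the same shape.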
The biggest business process management problems to solve before we die
It may be tempting for researchers to stick to incremental extensions of their current work when planning future research activities. Yet there is also merit in recognizing the grand challenges in one’s field. This paper presents an overview of nine major research problems for the Business Process Management discipline. These challenges have been collected by an open call to the community, discussed and refined in a workshop setting, and described here in detail, including a motivation why these problems are worth investigating. This overview may serve the purpose of inspiring both novice and advanced scholars who are interested in radical new ideas for the analysis, design, and management of work processes using information technology.
On sourcing and stocking policies in a two-echelon, multiple location, repairable parts supply chain
This research develops policies to minimize spare part purchases and repair costs for maintaining a fleet of mission-critical systems that operate from multiple forward (base) locations within a two-echelon repairable supply chain with a central depot. We take a tactical planning perspective to support periodic decisions for spare part purchases and repair sourcing, where the repair capabilities of the various locations are overlapping. We consider three policy classes: a central policy, where all repairs are sourced to a central depot; a local policy, whereby failures are repaired at forward locations; and a mixed policy, where a fraction of the parts is repaired at the bases and the remainder is repaired at the depot. Parts are classified based on their repair cost and lead time. For each part class, we suggest a solution that is based on threshold policies or on the use of a heuristic solution algorithm that extends the industry standard of marginal analysis to determine spare parts positioning by including repair fraction sourcing. A validation study shows that the suggested heuristic performs well compared to an exhaustive search (an average 0.2% difference in cost). An extensive numerical study demonstrates that the algorithm achieves costs which are lower by about 7–12% on average, compared to common, rule-based sourcing policies.
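The marginal analysis cited above as the industry standard can be sketched as a greedy loop: repeatedly buy the spare part that yields the largest reduction in expected backorders per unit of cost, until the budget is exhausted. The Poisson pipeline-demand model and all numbers are illustrative assumptions, not data from the study.

```python
import math

def poisson_backorders(lam, stock):
    # Expected backorders E[(D - s)+] for Poisson pipeline demand D,
    # truncated at k = 59 (negligible tail for small rates).
    return sum((k - stock) * math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(stock + 1, 60))

def marginal_allocation(parts, budget):
    # parts: {name: (demand_rate, unit_cost)}; greedy marginal analysis.
    stock = {name: 0 for name in parts}
    spend = 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (lam, cost) in parts.items():
            if spend + cost > budget:
                continue
            gain = (poisson_backorders(lam, stock[name])
                    - poisson_backorders(lam, stock[name] + 1))
            if gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:          # nothing affordable improves service
            return stock
        stock[best] += 1
        spend += parts[best][1]

parts = {"pump": (2.0, 3.0), "valve": (0.5, 1.0)}
print(marginal_allocation(parts, budget=10.0))  # {'pump': 3, 'valve': 1}
```

The paper's heuristic extends this one-dimensional greedy step by also choosing repair-sourcing fractions, which plain marginal analysis does not cover.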
An adaptive robust optimization model for parallel machine scheduling
Real-life parallel machine scheduling problems can be characterized by: (i) limited information about the exact task duration at the scheduling time, and (ii) an opportunity to reschedule the remaining tasks each time a task processing is completed and a machine becomes idle. Robust optimization is the natural methodology to cope with the first characteristic of duration uncertainty, yet the existing literature on robust scheduling does not explicitly consider the second characteristic: the possibility to adjust decisions as more information about task durations becomes available, even though re-optimizing the schedule every time new information emerges is standard practice. In this paper, we develop an adaptive robust optimization scheduling approach that takes into account, at the beginning of the planning horizon, the possibility that scheduling decisions can be adjusted. We demonstrate that the suggested approach can lead to better here-and-now decisions and better makespan guarantees. To that end, we develop the first mixed integer linear programming model for adaptive robust scheduling, and a two-stage approximation heuristic, where we minimize the worst-case makespan. Using this model, we show via a numerical study that adaptive scheduling leads to solutions with better and more stable makespan realizations compared to static approaches.
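A static, non-adaptive baseline of the kind the adaptive approach is compared against can be sketched as list scheduling against worst-case durations. The interval uncertainty sets and the longest-processing-time (LPT) order are illustrative assumptions; the paper's actual model is a mixed integer linear program, not this greedy rule.

```python
import heapq

def lpt_worst_case_makespan(duration_bounds, n_machines):
    # Static-robust baseline: commit to a schedule built against the
    # upper-bound durations, longest first, onto the least-loaded machine.
    loads = [0.0] * n_machines
    heapq.heapify(loads)
    for _, upper in sorted(duration_bounds, key=lambda b: -b[1]):
        heapq.heappush(loads, heapq.heappop(loads) + upper)
    return max(loads)

# Tasks with (lower, upper) duration bounds, scheduled on 2 machines.
bounds = [(2, 4), (1, 3), (3, 5), (2, 2), (1, 6)]
print(lpt_worst_case_makespan(bounds, 2))  # 11.0
```

An adaptive policy would instead recompute the assignment of remaining tasks each time a machine becomes idle and a realized duration is observed, which is what the paper's two-stage model anticipates at planning time.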
Minimizing mortality in a mass casualty event: fluid networks in support of modeling and staffing
The demand for medical treatment of casualties in mass casualty events (MCEs) exceeds resource supply. A key requirement in the management of such tragic but frequent events is thus the efficient allocation of scarce resources. This article develops a mathematical fluid model that captures the operational performance of a hospital during an MCE. The problem is how to allocate the surgeons—the scarcest of resources—between two treatment stations in order to minimize mortality. A focus is placed on casualties in need of immediate care. To this end, optimization problems are developed that are solved by combining theory with numerical analysis. This approach yields structural results that create optimal or near-optimal resource allocation policies. The results give rise to two types of policies, one that prioritizes a single treatment station throughout the MCE and a second policy in which the allocation priority changes. The approach can be implemented when preparing for MCEs and also during their real-time management when future decisions are based on current available information. The results of experiments, based on the outline of real MCEs, demonstrate that the proposed approach provides decision support tools, which are both useful and implementable.
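A toy fluid simulation can illustrate why a policy that prioritizes a single station may beat splitting surgeons evenly. All rates, queue sizes, and the Euler discretization below are assumed for illustration and are not the article's model or data.

```python
def simulate(policy, horizon=8.0, dt=0.01):
    # Euler integration of a two-station fluid queue: surgeons drain the
    # queues, while casualties still waiting die at rate mu.
    q = [30.0, 20.0]              # initial casualties per station
    arrival = [2.0, 1.0]          # arrival rates, stopping at t = 2
    service = [2.0, 1.0]          # drain rate per surgeon per station
    surgeons, mu = 10, 0.02
    deaths, t = 0.0, 0.0
    while t < horizon:
        alloc = policy(q)         # surgeons sent to station 0
        for i, a in enumerate((alloc, surgeons - alloc)):
            lam = arrival[i] if t < 2.0 else 0.0
            drain = min(q[i] / dt, a * service[i])
            deaths += mu * q[i] * dt
            q[i] = max(0.0, q[i] + (lam - drain - mu * q[i]) * dt)
        t += dt
    return deaths

prioritize_station_0 = lambda q: 10 if q[0] > 0 else 0
split_evenly = lambda q: 5
print(simulate(prioritize_station_0) < simulate(split_evenly))  # True
```

Clearing the faster station first shrinks the total waiting fluid sooner, which is the intuition behind the single-station-priority policies the article derives.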